For small training set sizes $P$, the generalization error of wide neural networks is well-approximated by the error of an infinite-width neural network (NN), either in the kernel or mean-field/feature-learning regime. However, after a critical sample size $P^*$, we empirically find that finite-width network generalization becomes worse than that of the infinite-width network. In this work, we empirically study the transition from infinite-width behavior to this variance-limited regime as a function of sample size $P$ and network width $N$. We find that finite-size effects can become relevant for very small dataset sizes, on the order of $P^* \sim \sqrt{N}$, for polynomial regression with ReLU networks. We discuss the source of these effects using an argument based on the variance of the NN's final neural tangent kernel (NTK). This transition can be pushed to larger $P$ by enhancing feature learning or by ensemble averaging the networks. We find that the learning curve for regression with the final NTK is an accurate approximation of the NN learning curve. Using this, we provide a toy model which also exhibits $P^* \sim \sqrt{N}$ scaling and has $P$-dependent benefits from feature learning.
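As a toy illustration (not the paper's experiment), the fluctuations of a finite-width kernel around its infinite-width limit can be seen by averaging $N$ random ReLU features: the standard deviation of the kernel estimate shrinks like $1/\sqrt{N}$, the scaling behind the variance-based argument for $P^* \sim \sqrt{N}$. The inputs, widths, and trial counts below are arbitrary choices:

```python
import math
import random

random.seed(0)

def kernel_estimate_std(n_features, x, z, n_trials=2000):
    """Estimate k(x,z) = E[relu(w*x) * relu(w*z)] with N random ReLU
    features, and return the across-trial std of that N-feature estimate."""
    relu = lambda a: max(a, 0.0)
    estimates = []
    for _ in range(n_trials):
        s = 0.0
        for _ in range(n_features):
            w = random.gauss(0, 1)
            s += relu(w * x) * relu(w * z)
        estimates.append(s / n_features)
    mean = sum(estimates) / n_trials
    var = sum((e - mean) ** 2 for e in estimates) / n_trials
    return math.sqrt(var)

std_small = kernel_estimate_std(25, 1.0, 0.5)
std_big = kernel_estimate_std(400, 1.0, 0.5)
# a 16x increase in width should shrink fluctuations by roughly 4x
print(std_small / std_big)
```

The ratio comes out near $\sqrt{400/25} = 4$, consistent with $1/\sqrt{N}$ kernel variance.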
We analyze feature learning in infinite-width neural networks trained with gradient flow through a self-consistent dynamical field theory. We construct a collection of deterministic dynamical order parameters, which are inner-product kernels for hidden unit activations and gradients in each layer at pairs of time points, providing a reduced description of network activity through training. These kernel order parameters collectively define the hidden layer activation distribution, the evolution of the neural tangent kernel (NTK), and consequently the output predictions. We show that the field theory derivation recovers the recursive stochastic process for infinite-width feature learning networks obtained by Yang and Hu (2021) with tensor programs. For deep linear networks, these kernels satisfy a set of algebraic matrix equations. For nonlinear networks, we provide an alternating sampling procedure to solve self-consistently for the kernel order parameters. We compare the self-consistent solution with various approximation schemes. Finally, we provide experiments in more realistic settings which demonstrate that the loss and kernel dynamics of CNNs on a CIFAR classification task are preserved across different widths.
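For a much simplified concrete instance of these kernel order parameters, the empirical NTK of a one-hidden-layer ReLU network in mean-field scaling decomposes exactly into a feature kernel and a gradient kernel of the kind the field theory tracks. The sketch below, with arbitrary width and scalar inputs, checks that the parameter-space definition of the NTK matches this kernel decomposition:

```python
import random

random.seed(1)
N = 64  # hidden width

# width-N one-hidden-layer ReLU net, f(x) = (1/N) * sum_i w2[i] * relu(w1[i] * x)
w1 = [random.gauss(0, 1) for _ in range(N)]
w2 = [random.gauss(0, 1) for _ in range(N)]

def relu(a): return max(a, 0.0)
def drelu(a): return 1.0 if a > 0 else 0.0

def grad(x):
    """Gradient of f(x) with respect to all parameters (w1, w2)."""
    g1 = [w2[i] * drelu(w1[i] * x) * x / N for i in range(N)]
    g2 = [relu(w1[i] * x) / N for i in range(N)]
    return g1 + g2

def ntk(x, z):
    """NTK from its parameter-space definition: grad(x) . grad(z)."""
    return sum(a * b for a, b in zip(grad(x), grad(z)))

def kernel_decomposition(x, z):
    """The same NTK written with layerwise order parameters:
    a feature kernel (activations) plus a gradient kernel."""
    phi = sum(relu(w1[i] * x) * relu(w1[i] * z) for i in range(N)) / N**2
    g = sum(w2[i]**2 * drelu(w1[i] * x) * drelu(w1[i] * z) for i in range(N)) / N**2
    return phi + g * x * z

print(abs(ntk(0.7, 1.3) - kernel_decomposition(0.7, 1.3)))  # ~0
```

The two expressions agree to floating-point precision; the field theory tracks the dynamics of these activation and gradient kernels directly rather than the weights.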
Neural networks in the lazy training regime converge to kernel machines. Can neural networks in the rich feature learning regime learn a kernel machine with a data-dependent kernel? We demonstrate that this can happen due to a phenomenon we term the silent alignment effect, which requires that the tangent kernel of a network evolves in eigenstructure while small and before the loss appreciably decreases, and grows only in overall scale afterwards. We show that such an effect takes place in homogeneous neural networks with small initialization and whitened data. We provide an analytical treatment of this effect in the linear network case. In general, we find that the kernel develops a low-rank contribution in the early phase of training and then evolves in overall scale, yielding a function equivalent to a kernel regression solution with the final network's tangent kernel. The early spectral learning of the kernel depends on the depth. We also demonstrate that non-whitened data can weaken the silent alignment effect.
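One standard way to quantify the alignment described above is kernel-target alignment (the paper's precise metric may differ). The sketch below shows the quantity at its extremes: a tangent kernel proportional to $y y^\top$ is perfectly aligned with the labels, while an isotropic kernel is not:

```python
def kernel_alignment(K, y):
    """Kernel-target alignment A = y^T K y / (||K||_F * ||y||^2),
    equal to 1 when K is proportional to the rank-one matrix y y^T."""
    n = len(y)
    num = sum(y[i] * K[i][j] * y[j] for i in range(n) for j in range(n))
    frob = sum(K[i][j] ** 2 for i in range(n) for j in range(n)) ** 0.5
    ynorm = sum(v * v for v in y)
    return num / (frob * ynorm)

y = [1.0, -1.0]
aligned = [[1.0, -1.0], [-1.0, 1.0]]    # K = y y^T: perfect alignment
isotropic = [[1.0, 0.0], [0.0, 1.0]]    # no preferred direction

print(kernel_alignment(aligned, y))     # 1.0
print(kernel_alignment(isotropic, y))   # 1/sqrt(2)
```

Silent alignment corresponds to the tangent kernel's alignment rising early in training, before the kernel's overall scale grows.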
Equivariance to identity-preserving transformations that constitute a group, such as translations and rotations, has emerged as a desirable property of representations of objects. However, the expressivity of representations constrained by group equivariance is still not fully understood. We address this gap by providing a generalization of Cover's function counting theorem that quantifies the number of linearly separable and group-invariant binary dichotomies that can be assigned to equivariant representations of objects. We find that the fraction of separable dichotomies is determined by the dimension of the space that is fixed by the group action. We show how this relation extends to operations such as convolutions, element-wise nonlinearities, and global and local pooling. While the other operations do not change the fraction of separable dichotomies, local pooling decreases the fraction, despite being a highly nonlinear operation. Finally, we test our theory on intermediate representations of randomly initialized and fully trained convolutional neural networks and find perfect agreement.
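The classical result being generalized here is Cover's function counting theorem: for $P$ points in general position in $\mathbb{R}^N$, the number of linearly separable dichotomies is $C(P,N) = 2\sum_{k=0}^{N-1}\binom{P-1}{k}$. A minimal sketch of that baseline (the paper's contribution replaces $N$ with the dimension of the subspace fixed by the group action):

```python
from math import comb

def cover_fraction(P, N):
    """Fraction of the 2^P dichotomies of P points in general position
    in R^N that are linearly separable (Cover, 1965)."""
    count = 2 * sum(comb(P - 1, k) for k in range(N))
    return count / 2 ** P

# P <= N: every dichotomy is separable; the capacity transition sits at P = 2N
print(cover_fraction(4, 4))  # 1.0
print(cover_fraction(8, 4))  # 0.5
```

The fraction equals exactly 1/2 at $P = 2N$, the well-known capacity of a linear readout.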
The generalization performance of a convolutional neural network (CNN) is largely determined by the quantity, quality, and diversity of its training images. In many real-world applications, all training data must be annotated in advance, yet data are easy to acquire but expensive and time-consuming to label. The goal of active learning is to draw the most informative samples from the unlabeled pool, which can then be used for training after annotation. Separately, self-supervised learning (SSL) has gained meteoric popularity by closing the performance gap with supervised methods on large computer vision benchmarks. SSL approaches produce low-level representations that are invariant to distortions of the input sample and can encode invariance to artificially created distortions, e.g., rotation, solarization, and cropping, while relying on simpler and more scalable training frameworks. In this paper, we unify these two families of approaches by performing active learning on the self-supervised manifold and propose Deep Active Learning using Barlow Twins (DALBT), an active learning method that combines a classifier trained jointly with the self-supervised loss framework of Barlow Twins, in a setting where the model can encode invariance to artificially created distortions, e.g., rotation, solarization, and cropping.
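A minimal sketch of the Barlow Twins objective referred to above (assuming nothing about the paper's architecture): the loss drives the cross-correlation matrix of two views' standardized embeddings toward the identity, with `lam` weighting the off-diagonal redundancy-reduction term:

```python
def barlow_twins_loss(za, zb, lam=5e-3):
    """Barlow Twins objective on two batches of embeddings (lists of rows):
    diagonal of the cross-correlation matrix pushed to 1 (invariance),
    off-diagonal pushed to 0 (redundancy reduction)."""
    n, d = len(za), len(za[0])

    def standardize(z):
        cols = []
        for j in range(d):
            col = [row[j] for row in z]
            mu = sum(col) / n
            sd = (sum((v - mu) ** 2 for v in col) / n) ** 0.5 or 1.0
            cols.append([(v - mu) / sd for v in col])
        return cols  # d x n, each dimension zero-mean and unit-variance

    a, b = standardize(za), standardize(zb)
    loss = 0.0
    for i in range(d):
        for j in range(d):
            c_ij = sum(a[i][k] * b[j][k] for k in range(n)) / n
            loss += (1 - c_ij) ** 2 if i == j else lam * c_ij ** 2
    return loss

# identical, already-decorrelated views give zero loss
z = [[1.0, -1.0], [-1.0, 1.0], [1.0, 1.0], [-1.0, -1.0]]
print(barlow_twins_loss(z, z))  # 0.0
```

In practice the two batches come from two random augmentations of the same images, which is how the encoded invariance to rotation, solarization, and cropping arises.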
The lack of standardization is a prominent issue in magnetic resonance (MR) imaging. This often causes undesired contrast variations due to differences in hardware and acquisition parameters. In recent years, MR harmonization using image synthesis with disentanglement has been proposed to compensate for the undesired contrast variations. Despite the success of existing methods, we argue that three major improvements can be made. First, most existing methods are built upon the assumption that multi-contrast MR images of the same subject share the same anatomy. This assumption is questionable since different MR contrasts are specialized to highlight different anatomical features. Second, these methods often require a fixed set of MR contrasts for training (e.g., both T1-weighted and T2-weighted images must be available), which limits their applicability. Third, existing methods are generally sensitive to imaging artifacts. In this paper, we present a novel approach, Harmonization with Attention-based Contrast, Anatomy, and Artifact Awareness (HACA3), to address these three issues. We first propose an anatomy fusion module that enables HACA3 to respect the anatomical differences between MR contrasts. HACA3 is also robust to imaging artifacts and can be trained and applied to any set of MR contrasts. Experiments show that HACA3 achieves state-of-the-art performance under multiple image quality metrics. We also demonstrate the applicability of HACA3 on downstream tasks with diverse MR datasets acquired from 21 sites with different field strengths, scanner platforms, and acquisition protocols.
When presented with a data stream of two statistically dependent variables, predicting the future of one of the variables (the target stream) can benefit from information about both its history and the history of the other variable (the source stream). For example, fluctuations in temperature at a weather station can be predicted using both temperatures and barometric readings. However, a challenge when modelling such data is that it is easy for a neural network to rely on the greatest joint correlations within the target stream, which may ignore a crucial but small information transfer from the source to the target stream. As well, there are often situations where the target stream may have previously been modelled independently and it would be useful to use that model to inform a new joint model. Here, we develop an information bottleneck approach for conditional learning on two dependent streams of data. Our method, which we call Transfer Entropy Bottleneck (TEB), allows one to learn a model that bottlenecks the directed information transferred from the source variable to the target variable, while quantifying this information transfer within the model. As such, TEB provides a useful new information bottleneck approach for modelling two statistically dependent streams of data in order to make predictions about one of them.
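The directed quantity being bottlenecked is transfer entropy. A plug-in estimator on discrete toy sequences (illustrative only; TEB itself operates on learned representations of continuous streams) shows it is positive exactly when the source's history adds predictive information about the target beyond the target's own history:

```python
import math
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy TE_{Y->X} in bits, comparing
    p(x_{t+1} | x_t, y_t) against p(x_{t+1} | x_t) on paired sequences."""
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))
    pairs_xy = Counter(zip(x[:-1], y[:-1]))
    pairs_xx = Counter(zip(x[1:], x[:-1]))
    singles = Counter(x[:-1])
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_xy = c / pairs_xy[(x0, y0)]
        p_cond_x = pairs_xx[(x1, x0)] / singles[x0]
        te += p_joint * math.log2(p_cond_xy / p_cond_x)
    return te

# source-driven target: x copies y with one step of delay
y = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0]
x = [0] + y[:-1]
print(transfer_entropy(x, y))  # positive: y's history predicts x
```

With a constant (uninformative) source the same estimator returns exactly zero, matching the intuition that TEB's bottleneck has nothing to transfer in that case.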
Reinforcement learning (RL) algorithms have achieved notable success in recent years, but still struggle with fundamental issues in long-term credit assignment. It remains difficult to learn in situations where success is contingent upon multiple critical steps that are distant in time from each other and from a sparse reward; as is often the case in real life. Moreover, how RL algorithms assign credit in these difficult situations is typically not coded in a way that can rapidly generalize to new situations. Here, we present an approach using offline contrastive learning, which we call contrastive introspection (ConSpec), that can be added to any existing RL algorithm and addresses both issues. In ConSpec, a contrastive loss is used during offline replay to identify invariances among successful episodes. This takes advantage of the fact that it is easier to retrospectively identify the small set of steps that success is contingent upon than it is to prospectively predict reward at every step taken in the environment. ConSpec stores this knowledge in a collection of prototypes summarizing the intermediate states required for success. During training, arrival at any state that matches these prototypes generates an intrinsic reward that is added to any external rewards. As well, the reward shaping provided by ConSpec can be made to preserve the optimal policy of the underlying RL agent. The prototypes in ConSpec provide two key benefits for credit assignment: (1) They enable rapid identification of all the critical states. (2) They do so in a readily interpretable manner, enabling out of distribution generalization when sensory features are altered. In summary, ConSpec is a modular system that can be added to any existing RL algorithm to improve its long-term credit assignment.
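A deliberately simplified sketch of the prototype-based reward shaping described above (in ConSpec the prototypes are learned from successful episodes via the contrastive loss, not hand-set, and the names `threshold` and `bonus` here are hypothetical choices): arrival at a state matching a stored success prototype adds an intrinsic reward to the external one:

```python
def cosine(u, v):
    """Cosine similarity between two state representations."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def shaped_reward(state, external_reward, prototypes, threshold=0.9, bonus=1.0):
    """Add an intrinsic bonus when the current state representation
    matches any stored success prototype closely enough."""
    match = max((cosine(state, p) for p in prototypes), default=0.0)
    intrinsic = bonus if match >= threshold else 0.0
    return external_reward + intrinsic

protos = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]  # hypothetical critical states
print(shaped_reward([0.99, 0.1, 0.0], 0.0, protos))  # near a prototype -> bonus
print(shaped_reward([0.0, 0.0, 1.0], 0.0, protos))   # no match -> external only
```

Because matching is done in representation space, the same prototypes can fire under altered sensory features, which is the source of the out-of-distribution generalization claimed above.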
Researchers produce thousands of scholarly documents containing valuable technical knowledge. The community faces the laborious task of reading these documents to identify, extract, and synthesize information. To automate information gathering, document-level question answering (QA) offers a flexible framework where human-posed questions can be adapted to extract diverse knowledge. Finetuning QA systems requires access to labeled data (tuples of context, question and answer). However, data curation for document QA is uniquely challenging because the context (i.e. answer evidence passage) needs to be retrieved from potentially long, ill-formatted documents. Existing QA datasets sidestep this challenge by providing short, well-defined contexts that are unrealistic in real-world applications. We present a three-stage document QA approach: (1) text extraction from PDF; (2) evidence retrieval from extracted texts to form well-posed contexts; (3) QA to extract knowledge from contexts to return high-quality answers -- extractive, abstractive, or Boolean. Using QASPER for evaluation, our detect-retrieve-comprehend (DRC) system achieves a +7.19 improvement in Answer-F1 over existing baselines while delivering superior context selection. Our results demonstrate that DRC holds tremendous promise as a flexible framework for practical scientific document QA.
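A toy stand-in for the retrieval stage (stage 2): DRC's actual retriever is learned, but the idea of ranking extracted passages against the question to form a well-posed context can be sketched with bag-of-words cosine similarity:

```python
import math
import re
from collections import Counter

def retrieve(question, passages):
    """Return the passage most similar to the question under
    bag-of-words cosine similarity (illustrative retriever only)."""
    def vec(text):
        return Counter(re.findall(r"[a-z0-9]+", text.lower()))

    def cos(a, b):
        dot = sum(a[w] * b[w] for w in set(a) & set(b))
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    q = vec(question)
    return max(passages, key=lambda p: cos(q, vec(p)))

passages = [
    "The model was trained for 10 epochs on QASPER.",
    "We evaluate answer quality with the Answer-F1 metric.",
]
print(retrieve("Which evaluation metric is used?", passages))
```

The selected passage then becomes the context fed to the comprehension (QA) stage, which produces the extractive, abstractive, or Boolean answer.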
Solutions of differential equations are of significant scientific and engineering importance. Physics-informed neural networks (PINNs) have emerged as a promising method for solving differential equations, but they lack a theoretical justification for the use of any particular loss function. This work presents Differential Equation GAN (DEQGAN), a novel method that uses generative adversarial networks to solve differential equations by "learning the loss function" used to optimize the neural network. Presenting results on a suite of twelve ordinary and partial differential equations, including the nonlinear Burgers', Allen-Cahn, Hamilton, and modified Einstein's gravity equations, we show that DEQGAN can achieve mean squared errors multiple orders of magnitude lower than PINNs that use $L_2$, $L_1$, and Huber loss functions. We also show that DEQGAN achieves solution accuracies competitive with popular numerical methods. Finally, we present two methods for improving the robustness of DEQGAN to different hyperparameter settings.
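The quantity the generator is adversarially pushed to produce is the equation residual, which should be indistinguishable from zero for a true solution. A sketch for the toy ODE $u' + u = 0$, using central finite differences in place of the automatic differentiation a real implementation would use:

```python
import math

def residual(u, t, h=1e-5):
    """Residual of u' + u = 0 at time t via central differences; the
    GAN-style training drives these residuals toward zero."""
    du = (u(t + h) - u(t - h)) / (2 * h)
    return du + u(t)

exact = lambda t: math.exp(-t)  # true solution with u(0) = 1
bad = lambda t: 1.0 - t         # first-order Taylor approximation only

ts = [0.1 * k for k in range(10)]
max_res_exact = max(abs(residual(exact, t)) for t in ts)
max_res_bad = max(abs(residual(bad, t)) for t in ts)
print(max_res_exact)  # ~0: the exact solution satisfies the equation
print(max_res_bad)    # large: the approximation is rejected
```

Here the "fake" samples the discriminator sees are these residuals and the "real" samples are zeros, so no fixed $L_2$, $L_1$, or Huber penalty on the residual needs to be chosen by hand.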